The Design of Informative Take-Over Requests for Semi-Autonomous Cyber-Physical Systems: Combining Spoken Language and Visual Icons in a Drone-Controller Setting

Gundappa, Ashwini, Ellsiepen, Emilia, Schmitz, Lukas, Wiehr, Frederik, Demberg, Vera

arXiv.org Artificial Intelligence

The question of how cyber-physical systems should interact with human partners who can take over control or exert oversight is becoming more pressing, as these systems are deployed for an ever larger range of tasks. Drawing on the literatures on handing over control during semi-autonomous driving and human-robot interaction, we propose a take-over request (TOR) design that combines an abstract pre-alert with an informative TOR: relevant sensor information is highlighted on the controller's display, while a spoken message verbalizes the reason for the TOR. We conduct our study in the context of a semi-autonomous drone control scenario as our testbed. The goal of our online study is to assess in more detail what form a language-based TOR should take. Specifically, we compare a full-sentence condition to shorter fragments, and test whether the visual highlighting should be done synchronously or asynchronously with the speech. Participants showed higher accuracy in choosing the correct solution with our bi-modal TOR and felt that they were better able to recognize the critical situation. Using only fragments in the spoken message rather than full sentences did not lead to improved accuracy or faster reactions. Also, synchronizing the visual highlighting with the spoken message did not result in better accuracy, and response times were even increased in this condition.


Bridging the Gap: Regularized Reinforcement Learning for Improved Classical Motion Planning with Safety Modules

Goldsztejn, Elias, Brafman, Ronen I.

arXiv.org Artificial Intelligence

Classical navigation planners can provide safe navigation, albeit often suboptimally and with poor compliance with human norms. Contemporary ML-based autonomous navigation algorithms can imitate more natural, human-compliant navigation, but usually require large and realistic datasets and do not always provide safety guarantees. We present an approach that leverages a classical algorithm to guide reinforcement learning. This greatly improves the results and convergence rate of the underlying RL algorithm and requires no human-expert demonstrations to jump-start the process. Additionally, we incorporate a practical fallback system that can switch back to a classical planner to ensure safety. The outcome is a sample-efficient ML approach to mobile navigation that builds on classical algorithms, improves them to ensure human compliance, and guarantees safety.
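The fallback idea described in this abstract can be sketched as a thin wrapper that executes the RL policy's action only when a safety check accepts it, and otherwise defers to the classical planner. This is a minimal illustrative sketch, not the authors' implementation; all names and the toy headway check are assumptions.

```python
def safe_navigate(state, rl_policy, classical_planner, is_safe):
    """Return the RL action when it passes the safety check,
    else fall back to the classical planner's action."""
    action = rl_policy(state)
    if is_safe(state, action):
        return action, "rl"
    return classical_planner(state), "classical"

# Toy usage: state is a distance to the nearest obstacle (metres);
# the check vetoes speed commands that would leave under 1 m headway.
rl = lambda s: 2.0                # aggressive speed command from the policy
planner = lambda s: 0.5           # conservative classical command
safe = lambda s, a: s - a > 1.0   # keep more than 1 m headway after acting

print(safe_navigate(5.0, rl, planner, safe))  # far from obstacle: RL accepted
print(safe_navigate(2.0, rl, planner, safe))  # too close: fallback engaged
```

The point of structuring it this way is that the safety guarantee rests entirely on the checker and the classical planner, so the learned policy can be improved or retrained without re-certifying the safety layer.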


Inverse Universal Traffic Quality -- a Criticality Metric for Crowded Urban Traffic Scenes

Schütt, Barbara, Zipfl, Maximilian, Zöllner, J. Marius, Sax, Eric

arXiv.org Artificial Intelligence

An essential requirement for scenario-based testing is the identification of critical scenes and their associated scenarios. However, critical scenes, such as collisions, occur comparatively rarely, so large amounts of data must be examined. A further issue is that recorded real-world traffic often consists of scenes with a high number of vehicles, and it can be challenging to determine which vehicles are most critical to the safety of an ego vehicle. Therefore, we present the inverse universal traffic quality, a criticality metric for urban traffic that is independent of predefined adversary vehicles and vehicle constellations such as intersection trajectories or car-following scenarios. Our metric is universally applicable to different urban traffic situations, e.g., intersections or roundabouts, and can be adjusted to certain situations if needed. Additionally, in this paper, we evaluate the proposed metric and compare its results to other well-known criticality metrics in this field, such as time-to-collision or post-encroachment time.
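For context on the baselines mentioned above, time-to-collision (TTC) in a simple car-following scene is the headway divided by the closing speed, defined only while the ego vehicle is actually closing in on the lead vehicle. A minimal sketch (the inverse universal traffic quality metric itself is not reproduced here):

```python
def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """TTC for a car-following pair: headway / closing speed.
    Returns infinity when the ego is not gaining on the lead,
    i.e. no collision is projected under constant speeds."""
    closing = ego_speed_mps - lead_speed_mps
    if closing <= 0:
        return float("inf")
    return gap_m / closing

print(time_to_collision(30.0, 20.0, 10.0))  # 3.0 seconds
print(time_to_collision(30.0, 10.0, 20.0))  # inf: ego is falling back
```

The limitation the abstract points at is visible even in this sketch: TTC presumes a designated adversary pair and a following geometry, which is exactly what a metric for crowded, unconstrained urban scenes must avoid.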


Introduction to Self Driving Cars

#artificialintelligence

Originally published on Towards AI, the world's leading AI and technology news and media company. Self-driving cars, also called autonomous cars, are capable of driving with little or no input from the driver.


AI safety system offers autonomous vehicle drivers seven seconds warning

#artificialintelligence

A team of researchers in Germany has come up with a safety system that could warn drivers of autonomous cars that they will have to take control up to seven seconds in advance. The team, at the Technical University of Munich (TUM), has developed a new early warning system for autonomous vehicles that uses artificial intelligence to learn from thousands of real traffic situations. The study of the system was carried out in cooperation with the BMW Group. Researchers behind the study claim that if used in today's self-driving vehicles, it could offer seven seconds' advance warning of potentially critical situations that the cars cannot handle alone, with over 85 per cent accuracy. To make self-driving cars safe in the future, development efforts often rely on sophisticated models aimed at giving cars the ability to analyse the behaviour of other traffic.


Researchers design AI-based early warning system for autonomous cars

#artificialintelligence

With massive importance being given to passenger safety in self-driving cars, a team of researchers from the Technical University of Munich (TUM) has designed a new AI-based early warning system for autonomous vehicles. The study was carried out in association with the BMW Group and published in the journal IEEE Transactions on Intelligent Transportation Systems. The results of the study showed that, when used in today's self-driving vehicles, the system can warn seven seconds in advance, with 85% accuracy, of critical situations that the cars cannot handle alone. Notably, the technology makes use of cameras and sensors to capture the surrounding environment while simultaneously recording vehicle data such as road conditions, speed, visibility, and steering-wheel angle. The AI system, based on a recurrent neural network, then learns to detect patterns in the procured data.
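The core mechanism described, a recurrent network that folds a sequence of per-timestep vehicle features into a criticality score, can be illustrated with a deliberately tiny sketch. This is not the TUM/BMW system: the weights below are fixed toy values where the real system learns them from thousands of recorded traffic situations, and the single-feature input stands in for the multi-sensor data described above.

```python
import math

def rnn_warning_score(sequence, w_in=0.5, w_rec=0.8, w_out=2.0):
    """Fold a feature sequence through one recurrent unit, then map the
    final hidden state to a (0, 1) criticality-like score."""
    h = 0.0
    for x in sequence:
        h = math.tanh(w_in * x + w_rec * h)    # recurrent state update
    return 1.0 / (1.0 + math.exp(-w_out * h))  # logistic output

calm = [0.0, 0.1, 0.0, 0.1]      # steady driving (normalized deviations)
erratic = [0.2, 0.9, 1.5, 2.0]   # escalating steering/speed deviations

print(rnn_warning_score(calm) < rnn_warning_score(erratic))  # True
```

The recurrence is what lets the score depend on how a situation develops over the warning horizon rather than on any single frame, which is the property the seven-second lead time relies on.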


Concerns Using Machine Learning in Software Development - UrIoTNews

#artificialintelligence

There can be a propensity to view ML as a be-all, end-all solution, but it's not. It's imperative that developers adhere to traditional SDLC protocols to produce quality products. Developing and optimizing an ML model involves a large number of hyperparameters, and it is hard to distinguish a failure of the method from a bad parameter choice. If the model is used for multiple tasks, it is hard to ensure that incremental improvements for one task do not break others.


DADA-2000: Can Driving Accident be Predicted by Driver Attention? Analyzed by A Benchmark

Fang, Jianwu, Yan, Dingxin, Qiao, Jiahuan, Xue, Jianru, Wang, He, Li, Sen

arXiv.org Artificial Intelligence

Driver attention prediction is becoming a focus of the safe-driving research community, as seen in the DR(eye)VE project and the newly emerged Berkeley DeepDrive Attention (BDD-A) database for critical situations. In safe driving, an essential task is to predict incoming accidents as early as possible. BDD-A was aware of this problem and, because of the rarity of such scenes, collected driver attention in the laboratory. Nevertheless, BDD-A focuses on critical situations that do not involve actual accidents, and addresses only the driver attention prediction task, without a further step toward accident prediction. In contrast, we explore the view of drivers' eyes for capturing multiple kinds of accidents, and construct a more diverse and larger video benchmark than ever before, with driver attention and driving accident annotations collected simultaneously (named DADA-2000), which has 2000 video clips containing about 658,476 frames covering 54 kinds of accidents. These clips are crowd-sourced and captured in various settings (highway, urban, rural, and tunnel), weather (sunny, rainy, and snowy), and light conditions (daytime and nighttime). For the driver attention representation, we collect fixation maps, saccade scan paths, and focusing times. The accidents are annotated with their categories, the accident window within the clips, and the spatial locations of the crash objects. Based on this analysis, we obtain a quantitative and positive answer to the question posed in the title.


Man vs. AI: these jobs are safe, for now... AndroidPIT

#artificialintelligence

AI is progressing exponentially year after year, and the media often like to exaggerate what this technology can do. But the truth is that, although this technology is actually making great strides forward, the use of an AI-only workforce is still in doubt. There are those who have spoken out about the potential for future AI applications in industry. Kai-Fu Lee, the author of the bestselling book AI Superpowers: China, Silicon Valley, and the New World Order, told Dailymail.com


The US military wants to teach AI some basic common sense

MIT Technology Review

Wherever artificial intelligence is deployed, you will find it has failed in some amusing way. Take the strange errors made by translation algorithms that confuse having someone for dinner with, well, having someone for dinner. But as AI is used in ever more critical situations, such as driving autonomous cars, making medical diagnoses, or drawing life-or-death conclusions from intelligence information, these failures will no longer be a laughing matter. That's why DARPA, the research arm of the US military, is addressing AI's most basic flaw: it has zero common sense. "Common sense is the dark matter of artificial intelligence," says Oren Etzioni, CEO of the Allen Institute for AI, a research nonprofit based in Seattle that is exploring the limits of the technology.